22 research outputs found

    A proof of concept on real-time air quality monitoring system

    According to the Department of Environment Malaysia, the Air Pollutant Index (API) is an indicator of the air quality status in any particular area. The API is calculated from the average concentrations of air pollutants such as sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ground-level ozone (O3) and particulate matter (PM10), each over its respective averaging period. A sub-index for each pollutant is determined separately against a predetermined standard, and the highest sub-index is taken as the API (Department of Environment Malaysia, 2000). The United States Environmental Protection Agency (US EPA) (2010) defines PM10 as "inhalable coarse particles," such as those found near roadways and dusty industries, with diameters larger than 2.5 micrometers and smaller than 10 micrometers. Typically, the concentration of PM10 is the highest among all pollutants in Malaysia, and it therefore determines the API value (APIMS DOE Malaysia, 2015).
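The highest-sub-index rule described in the abstract can be sketched as follows. The sub-index values below are illustrative stand-ins; real sub-indices are computed from averaged concentrations against DOE breakpoint tables, which are not reproduced here.

```python
# Sketch of the API rule: each pollutant gets a sub-index, and the
# overall API is the highest sub-index. Values are illustrative.
def air_pollutant_index(sub_indices):
    """Return the API value and the pollutant that determines it."""
    pollutant = max(sub_indices, key=sub_indices.get)
    return sub_indices[pollutant], pollutant

readings = {"SO2": 18, "NO2": 12, "CO": 25, "O3": 40, "PM10": 87}
api, dominant = air_pollutant_index(readings)
print(api, dominant)  # → 87 PM10 (PM10 typically dominates, as noted)
```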

    Attributes sanitization in object-oriented design to improve database structure

    Modelling using the Entity Relationship Model was introduced more than thirty years ago, and in the late 1990s the object-oriented approach introduced the class diagram. However, designing a good database remains a serious issue. Some problems are very difficult to handle, such as checking consistency between the system design and the database design, data redundancy, mismatches between the data structure and users' needs, and unused data in the database. In this thesis, a new technique called UInData is introduced as an alternative method for designing databases based on attribute sanitization. The proposed technique extracts class behaviour from the class diagram to produce a schema table, which is then compared with the user interface to normalize the structure. Attribute sanitization is introduced to remove unused attributes and to produce the final schema table. An experiment using three case studies showed improvements in designing an optimal database in terms of data sanitization and data accessibility. Attribute sanitization was applied to the LAS, SPKS and MPBP databases, removing 2.2%, 14.1% and 24.5% of the defined attributes, respectively, that were not used by the user interface. In terms of data accessibility, LAS was reduced by 50%, SPKS showed no reduction, and MPBP was reduced by 20% when UInData was used compared with the ordinary object-oriented approach. Therefore, UInData is a good alternative technique for improving database structure.
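The attribute-sanitization step can be pictured as a set comparison between the attributes in the generated schema table and those actually referenced by the user interface. The UInData internals are not specified in the abstract, so this is a hypothetical sketch with made-up attribute names:

```python
# Hypothetical sketch of attribute sanitization: attributes that appear
# in the schema table but are never referenced by any user-interface
# form are removed before producing the final schema.
def sanitize_attributes(schema_attrs, ui_attrs):
    used = set(schema_attrs) & set(ui_attrs)
    removed = set(schema_attrs) - used
    removed_pct = len(removed) / len(schema_attrs) * 100
    return sorted(used), sorted(removed), removed_pct

schema = ["id", "name", "address", "fax", "telex"]   # illustrative
ui = ["id", "name", "address"]                       # illustrative
kept, removed, pct = sanitize_attributes(schema, ui)
print(kept, removed, f"{pct:.1f}% removed")
```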

    Lecture assessment system (manage student attendance module and manage marks module)

    Kolej Universiti Teknologi Tun Hussein Onn (KUiTTHO) has been awarded the ISO 9001:2000 Quality Management System Standard. Unfortunately, there are still problems with its student assessment. At the end of a semester, certain students often have no marks even though they are registered in the Student Information System, and by then it is too late to solve the problem. This happens because students register for a subject but do not attend the class, or sometimes attend another class for the same subject. In certain cases, the lecturer mistakenly transfers the students' marks onto the computer form. The management introduced a general format for lecturers to keep student assessment records; after this format was implemented, the same problems still occurred, although the number of cases decreased. The Lecture Assessment System is an alternative way to overcome these problems: through it, the management can monitor students' progress marks together with their attendance. The Lecture Assessment System is implemented as a web-based application.

    Comparative Analysis of Mice Protein Expression: Clustering and Classification Approach

    The mice protein expression dataset was created to study the effect of learning in normal and trisomic mice, i.e. mice with Down Syndrome (DS). The extra copy of a normal chromosome in DS is believed to alter normal pathways and normal responses to stimulation, causing learning and memory deficits. This research analyzes the protein expression dataset to identify protein influences that could affect the ability of trisomic mice to recover the ability to learn. Two data mining tasks are employed: clustering and classification analysis. Clustering analysis via K-Means, Hierarchical Clustering, and Decision Tree proved useful for identifying common critical protein responses, which in turn helps identify potentially more effective drug targets. Meanwhile, all classification models, including k-Nearest Neighbor, Random Forest, and Naive Bayes, efficiently classify protein samples into the given eight classes with very high accuracy.
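To make the clustering step concrete, here is a minimal one-dimensional K-Means sketch in pure Python. The real protein-expression data is high-dimensional with many samples; the expression values and the choice of two clusters below are illustrative only.

```python
# Minimal 1-D K-Means sketch: assign each value to its nearest
# centroid, then recompute centroids as cluster means, repeating
# for a fixed number of iterations.
def kmeans_1d(values, centroids, iterations=10):
    clusters = {}
    for _ in range(iterations):
        clusters = {c: [] for c in range(len(centroids))}
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda c: abs(v - centroids[c]))
            clusters[nearest].append(v)
        centroids = [sum(pts) / len(pts) if pts else centroids[c]
                     for c, pts in clusters.items()]
    return centroids, clusters

expression = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]   # illustrative values
centers, groups = kmeans_1d(expression, [0.0, 1.0])
print(centers)  # two cluster means, near 0.15 and 0.95
```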

    A hybrid word embedding model based on admixture of poisson-gamma latent dirichlet allocation model and distributed word-document- topic representation

    This paper proposes a hybrid Poisson-Gamma Latent Dirichlet Allocation (PGLDA) model designed to model word dependencies and accommodate the semantic representation of words. The new model simultaneously overcomes the complexity shortcomings of using LDA as the baseline model and adequately captures the contextual correlation of words. The Poisson document-length distribution is replaced with the Poisson-Gamma admixture to model word correlation when a hub word connects words and topics. Furthermore, the distributed representations of documents (Doc2Vec) and topics (Topic2Vec) are averaged to form new word-representation vectors, which are combined with the topics of largest likelihood from PGLDA. Model estimation is achieved by combining the Laplacian approximation of the PGLDA log-likelihood with the Feed-Forward Neural Network (FFN) approaches of Doc2Vec and Topic2Vec. The proposed hybrid method was evaluated for precision, recall, and F1 score on the 20 Newsgroups and AG's News datasets. Comparative analysis of the F1 scores showed that the proposed hybrid model outperformed the other methods.
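The vector-averaging step described above is simple to illustrate: a Doc2Vec document vector and a Topic2Vec topic vector of the same dimension are averaged element-wise. The three-dimensional vectors below are illustrative stand-ins, not trained embeddings.

```python
# Sketch of combining a document vector (Doc2Vec) and a topic vector
# (Topic2Vec) by element-wise averaging, as described in the abstract.
def average_vectors(doc_vec, topic_vec):
    assert len(doc_vec) == len(topic_vec), "dimensions must match"
    return [(d + t) / 2.0 for d, t in zip(doc_vec, topic_vec)]

doc2vec = [0.5, 0.25, -0.5]    # illustrative document embedding
topic2vec = [0.5, 0.75, 1.0]   # illustrative topic embedding
print(average_vectors(doc2vec, topic2vec))  # → [0.5, 0.5, 0.25]
```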

    Predicting Handling Covid-19 Opinion using Naive Bayes and TF-IDF for Polarity Detection

    There are many public responses to the implementation of government policies related to Covid-19, both positive and negative, especially on the government's official social media portals. Twitter is one social medium where people freely express their opinions. This study applies sentiment analysis to Twitter to classify public opinion on the implementation of government policies related to Covid-19. Public sentiment is analyzed from tweet data in several stages. The first step is data mining to collect the tweets to be analyzed, followed by cleaning the tweet data and converting it to lowercase. Next, each tweet is reduced to its base words and the frequency of each word's appearance is calculated. The Naïve Bayes method is then used to determine the sentiment classification of each tweet. The results showed that Indonesian public sentiment about Covid-19 prevention is neutral, with the application achieving an accuracy of 76.7%. In conclusion, the Indonesian government needs to evaluate the policies taken to deal with Covid-19 in order to create positive opinions and build solid cooperation between the government and residents in tackling the Covid-19 outbreak.
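The classification step can be sketched as a minimal multinomial Naïve Bayes classifier over word counts with Laplace smoothing. The tiny training set below is illustrative, and the paper's TF-IDF weighting and Indonesian-language preprocessing are omitted for brevity.

```python
# Minimal multinomial Naive Bayes sketch for tweet polarity.
import math
from collections import Counter

def train(docs):
    """docs: list of (token_list, label) pairs."""
    labels = Counter(label for _, label in docs)
    words = {lab: Counter() for lab in labels}
    vocab = set()
    for tokens, lab in docs:
        words[lab].update(tokens)
        vocab.update(tokens)
    return labels, words, vocab

def classify(tokens, labels, words, vocab):
    total = sum(labels.values())
    best, best_lp = None, -math.inf
    for lab, n in labels.items():
        lp = math.log(n / total)                       # log prior
        denom = sum(words[lab].values()) + len(vocab)  # Laplace smoothing
        for t in tokens:
            lp += math.log((words[lab][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = lab, lp
    return best

docs = [(["good", "policy"], "positive"),     # illustrative training set
        (["bad", "response"], "negative"),
        (["good", "handling"], "positive")]
model = train(docs)
print(classify(["good", "policy"], *model))   # → positive
```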

    Comparative analysis of classification algorithms for chronic kidney disease diagnosis

    Chronic Kidney Disease (CKD) is one of the leading causes of death, contributed to by other illnesses such as diabetes, hypertension, lupus, anemia, or weak bones that lead to fractures. Early prediction of CKD is important in order to contain the disease. Rather than predicting the severity of CKD, the objective of this paper is to predict the diagnosis of CKD, i.e. whether a particular case is acute or chronic, based on the symptoms or attributes observed. To achieve this, a classification model is proposed to label the stage of severity for kidney disease patients. The experiments then investigate the performance of the proposed classification model using eight supervised classification algorithms: ZeroR, Rule Induction, Support Vector Machine, Naïve Bayes, Decision Tree, Decision Stump, k-Nearest Neighbour, and Classification via Regression. The performance of all classifiers is evaluated based on accuracy, precision, and recall. The results showed that the regression-based classifier performs best in the kidney diagnostic procedure.
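The three evaluation metrics used above can be computed directly from the confusion-matrix counts. The labels and predictions below are illustrative, not from the paper's dataset.

```python
# Sketch of accuracy, precision, and recall for a binary diagnosis
# label ("ckd" vs "notckd"), as used to compare the classifiers above.
def evaluate(y_true, y_pred, positive="ckd"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

truth = ["ckd", "ckd", "notckd", "ckd", "notckd"]   # illustrative
pred = ["ckd", "notckd", "notckd", "ckd", "ckd"]
print(evaluate(truth, pred))
```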

    Comparative Analysis of Data Redundancy and Execution Time between Relational and Object-Oriented Schema Table

    Database design is one of the important phases in designing software, because the database is where the system's data is stored. One of the most popular techniques used in database design is the relational technique, which focuses on the entity relationship diagram and normalization. The relational technique is useful for eliminating data redundancy because normalization produces normal forms for the schema tables. The second technique is the object-oriented technique, which focuses on the class diagram and on generating schema tables from it. An advantage of the object-oriented technique is its close correspondence to programming languages like C++ or Java. This paper compares the performance of the relational and object-oriented techniques in solving data redundancy during the database design phase, as well as measuring query execution time. The experimental results, based on a course database case study, traced 186 redundant records using the relational technique and 204 redundant records using the object-oriented technique. The query execution times measured were 46.75 ms and 31.75 ms for the relational and object-oriented techniques, respectively.
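One simple way to operationalize "redundant records" is to count how many rows duplicate an earlier row with identical attribute values. The abstract does not define its exact tracing procedure, so this is a hypothetical sketch with made-up course rows:

```python
# Hypothetical sketch of counting redundant records in a schema table:
# a record counts as redundant when its attribute values duplicate an
# earlier record's values exactly.
from collections import Counter

def count_redundant(records):
    counts = Counter(tuple(r) for r in records)
    return sum(n - 1 for n in counts.values())

course_rows = [("DB101", "Databases", "Ali"),    # illustrative rows
               ("DB101", "Databases", "Ali"),
               ("SE202", "Software Eng", "Tan"),
               ("DB101", "Databases", "Ali")]
print(count_redundant(course_rows))  # → 2
```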

    Syukor Batik management system using a mobile application

    The Syukor Batik Management System Using a Mobile Application was developed to help the Syukor Batik shop manage its sales activities and to make it easier for customers to place orders online. The existing manual system makes it difficult for the shop to deal with customers through live chat messages. It burdens the shop when confirming customers' orders and payments because of slow feedback, communication problems, and dropped connections. It is also difficult to keep accurate records of customer details, orders, and sales when they are only noted in a daily logbook, and the shop struggles to keep customers updated on current stock levels, since stock quantities change constantly and are updated in an unproductive way. Therefore, this system was developed to support sales and record management and to make it easier for customers to place orders online. The system was developed in the Java programming language using Android Studio and Firebase, guided by the Object-oriented Systems Development Life Cycle (OOSDLC) model. It is divided into two distinct but interrelated applications: customers who want to order online use the mobile application, while the shop uses a web application to manage all sales and customer activities coming from the mobile application. The development of this system is expected to help the shop productively minimize the weaknesses and problems of its previous sales management.

    Comparative study of football team rating system using elo rating and pi-rating for Switzerland Super League

    A sports rating system analyses the results of sports competitions to provide ratings for each team or player. Usually, in a football match, the audience predicts which team will win based on the goals scored by half-time or on penalties. Such prediction matters because, when evaluating match results, it is important to first compare the potential strength of the teams involved. The main goal of this research is therefore to compare the performance of two team rating systems, Elo Rating and Pi Rating, when forecasting match outcomes in association football. The well-known Elo Rating system is used to calculate team ratings, whereas Pi Rating predicts football match results based on a team's performance when playing at home or away. Two different techniques are used to generate forecasts, and both types of model can produce pre-game forecasts. The Elo ratings worked better when predicting matches from a large dataset, while the Pi Rating system applies to any sport in which the score is a good indicator for prediction and for determining the relative performance of adversaries. The data used in this study is a dataset of around 1421 Switzerland Super League matches from the Football-Data.co.uk website. The research found that a classification model based on the Decision Forest classifier is effective, with a 68% f-measure for Pi Rating and 73% f-measure for Elo Rating. Therefore, Elo Rating is the better team rating system for predicting football competitions.
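The standard Elo update rule mentioned above can be sketched as follows. The K-factor of 20 and the starting ratings are illustrative assumptions, not the paper's exact parameters, and this sketch omits any home-advantage or goal-difference adjustments.

```python
# Standard Elo update sketch: the expected score follows the logistic
# curve on the rating difference, and ratings move by K times the
# difference between the actual and expected outcome.
def elo_update(r_home, r_away, score_home, k=20):
    """score_home: 1 for a home win, 0.5 for a draw, 0 for a loss."""
    expected_home = 1 / (1 + 10 ** ((r_away - r_home) / 400))
    delta = k * (score_home - expected_home)
    return r_home + delta, r_away - delta

home, away = elo_update(1500, 1500, 1)  # equal teams, home win
print(round(home, 1), round(away, 1))   # → 1510.0 1490.0
```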